Dualing GANs

Authors

  • Yujia Li
  • Alexander G. Schwing
  • Kuan-Chieh Wang
  • Richard S. Zemel
Abstract

Generative adversarial nets (GANs) are a promising technique for modeling a distribution from samples. It is however well known that GAN training suffers from instability due to the nature of its saddle point formulation. In this paper, we explore ways to tackle the instability problem by dualizing the discriminator. We start from linear discriminators in which case conjugate duality provides a mechanism to reformulate the saddle point objective into a maximization problem, such that both the generator and the discriminator of this ‘dualing GAN’ act in concert. We then demonstrate how to extend this intuition to non-linear formulations. For GANs with linear discriminators our approach is able to remove the instability in training, while for GANs with nonlinear discriminators our approach provides an alternative to the commonly used GAN training algorithm.
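As a rough illustration of the dualization idea (a simplified setting with a linear score and an ℓ2 penalty, not the paper's exact derivation), suppose the discriminator scores samples with $s_w(x) = w^\top \phi(x)$ for fixed features $\phi$. The inner maximization over $w$ then has a closed form:

$$\max_{w}\; \mathbb{E}_{x\sim p_{\mathrm{data}}}\big[w^\top\phi(x)\big] \;-\; \mathbb{E}_{z\sim p_z}\big[w^\top\phi(G_\theta(z))\big] \;-\; \tfrac{\lambda}{2}\|w\|^2 \;=\; \tfrac{1}{2\lambda}\,\big\|\mu_{\mathrm{data}} - \mu_{G_\theta}\big\|^2,$$

where $\mu_{\mathrm{data}} = \mathbb{E}_{x\sim p_{\mathrm{data}}}[\phi(x)]$ and $\mu_{G_\theta} = \mathbb{E}_{z\sim p_z}[\phi(G_\theta(z))]$. In this simplified case the saddle point collapses into a single objective in the generator parameters alone (a feature moment-matching distance), which conveys why the dual formulation avoids the unstable adversarial alternation.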

Related papers

Generative Adversarial Networks (GANs): What it can generate and What it cannot?

Why are Generative Adversarial Networks (GANs) so popular? What is the purpose of designing GANs? Can we justify functioning of GANs theoretically? How are the theoretical guarantees? Are there any shortcomings? With the popularity of GANs, the researchers across the globe have been perplexed by these questions. In the last year (2017), a plethora of research papers attempted to answer the abov...

Full text

Memorization Precedes Generation: Learning Unsupervised GANs

We propose an approach to address two undesired properties of unsupervised GANs. First, since GANs use only a continuous latent distribution to embed multiple classes or clusters of a dataset, GANs often do not correctly handle the structural discontinuity between disparate classes in a latent space. Second, discriminators of GANs easily forget about past generated samples by generators, incurr...

Full text

Selecting the Best in GANs Family: a Post Selection Inference Framework

"Which Generative Adversarial Networks (GANs) generates the most plausible images?" has been a frequently asked question among researchers. To address this problem, we first propose an incomplete U-statistics estimate of maximum mean discrepancy, MMD_inc, to measure the distribution discrepancy between generated and real images. MMD_inc enjoys the advantages of asymptotic normality, computation eff...

Full text
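To make the MMD quantity mentioned above concrete, here is a minimal NumPy sketch of a pair-subsampled ("incomplete") U-statistic estimate of squared MMD with a Gaussian kernel. The function names, kernel choice, and sampling scheme are illustrative assumptions, not the MMD_inc construction from that paper.

import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # RBF kernel: k(a, b) = exp(-||a - b||^2 / (2 * sigma^2))
    diff = a - b
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

def incomplete_mmd2(x, y, num_pairs=2000, sigma=1.0, rng=None):
    # Subsampled ("incomplete") U-statistic estimate of squared MMD between
    # sample sets x and y: instead of averaging the kernel over all O(n^2)
    # pairs, average over a random subset of pairs to reduce computation.
    rng = np.random.default_rng() if rng is None else rng

    def mean_kernel(a, b, exclude_equal):
        total = 0.0
        for _ in range(num_pairs):
            i = rng.integers(len(a))
            j = rng.integers(len(b))
            while exclude_equal and i == j:  # U-statistics skip i == j pairs
                j = rng.integers(len(b))
            total += gaussian_kernel(a[i], b[j], sigma)
        return total / num_pairs

    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 * E[k(x, y)]
    return (mean_kernel(x, x, True)
            + mean_kernel(y, y, True)
            - 2.0 * mean_kernel(x, y, False))

# Toy usage: "real" and "generated" samples from slightly different Gaussians.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 16))
fake = rng.normal(0.5, 1.0, size=(500, 16))
print(incomplete_mmd2(real, fake, rng=rng))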

Publication date: 2017